
    An Efficient Emergency, Healthcare, and Medical Information System

    Many of the current Indian medical-information and emergency systems are still paper-based, stand-alone systems that do not fully utilize Internet, multimedia, wireless, and real-time technologies. This project develops an Integrated Emergency, Healthcare, and Medical Information System (IEHMS) that can overcome many of the problems in the current systems. The main aim of this work is to incorporate real-time technologies into medical emergency systems. The proposed system can notify users via SMS, MMS, phone call, and email. A prototype of the proposed system is implemented using open-source tools. The system aims to transform the delivery of emergency medical services such as ambulance dispatch and first aid. In a motivating case, the call centre of the Emergency Management and Research Institute took up a call and identified the exact location where the patient had collapsed. When attending a call, the operator checks for similar calls and searches for an ambulance located in that area; each ambulance is staffed by one driver, one helper, one experienced paramedic, and, if required, one trained medical officer. For efficient medical monitoring of the patient's condition, the institute has qualified personnel with knowledge and skills sufficient to evaluate and stabilize patients with potentially lethal or disabling conditions
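    The dispatch step described above, where the operator locates an ambulance in the caller's area, can be sketched as a nearest-available-vehicle lookup. This is a minimal illustration, not the paper's implementation; the fleet data, field names, and coordinates are all hypothetical.

    ```python
    import math

    # Illustrative fleet data; coordinates and ids are made up for the sketch.
    AMBULANCES = [
        {"id": "AMB-1", "lat": 17.38, "lon": 78.48, "available": True},
        {"id": "AMB-2", "lat": 17.45, "lon": 78.50, "available": False},
        {"id": "AMB-3", "lat": 17.40, "lon": 78.40, "available": True},
    ]

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometres."""
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    def nearest_ambulance(lat, lon, fleet):
        """Return the closest available ambulance, or None if none are free."""
        free = [a for a in fleet if a["available"]]
        if not free:
            return None
        return min(free, key=lambda a: haversine_km(lat, lon, a["lat"], a["lon"]))
    ```

    A real system would also need to reserve the chosen vehicle atomically so two operators cannot dispatch the same ambulance to different calls.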

    Vision-Based Deep Web Data Extraction For Web Document Clustering

    The design of web information extraction systems becomes more complex and time-consuming. Detection of data region is a significant problem for information extraction from the web page. In this paper, an approach to vision-based deep web data extraction is proposed for web document clustering. The proposed approach comprises of two phases: 1) Vision-based web data extraction, and 2) web document clustering. In phase 1, the web page information is segmented into various chunks. From which, surplus noise and duplicate chunks are removed using three parameters, such as hyperlink percentage, noise score and cosine similarity. Finally, the extracted keywords are subjected to web document clustering using Fuzzy c-means clustering (FCM)
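    The phase-1 filtering step can be sketched as follows: a chunk is dropped if its hyperlink percentage is too high (navigation noise) or if it nearly duplicates an already-kept chunk (high cosine similarity). This is a hedged sketch only; the thresholds and the chunk representation are assumptions, and the paper's noise-score formula is not reproduced.

    ```python
    import math
    from collections import Counter

    def cosine_similarity(text_a, text_b):
        """Cosine similarity between bag-of-words vectors of two texts."""
        a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def filter_chunks(chunks, max_link_pct=0.5, max_sim=0.9):
        """chunks: list of dicts with 'text' and 'links' (hyperlink word count).

        Keeps a chunk only if it is not link-dominated and not a
        near-duplicate of a previously kept chunk.
        """
        kept = []
        for c in chunks:
            words = max(len(c["text"].split()), 1)
            if c["links"] / words > max_link_pct:   # navigation/noise chunk
                continue
            if any(cosine_similarity(c["text"], k["text"]) > max_sim
                   for k in kept):                   # duplicate chunk
                continue
            kept.append(c)
        return kept
    ```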


    Mining of Frequent OptimisticEstimations by Using Measured Techniques

    In recent years the size of databases has increased rapidly. This has led to a growing interest in the development of tools capable of automatically extracting knowledge from data. The term Data Mining, or Knowledge Discovery in Databases, has been adopted for a field of research dealing with the automatic discovery of implicit information or knowledge within databases. Several efficient algorithms, such as the Apriori algorithm, have been proposed for finding frequent itemsets, from which association rules are derived. These Apriori-like algorithms suffer from the cost of handling a huge number of candidate sets and of scanning the database repeatedly. A frequent-pattern tree (FP-tree) structure, which stores compressed and critical information about frequent patterns, was developed for finding the complete set of frequent itemsets. This approach avoids the costly generation of a large number of candidate sets and repeated database scans, and is regarded as the most efficient strategy for mining frequent itemsets. Finding infrequent items gives useful feedback to the production manager. In this paper, we find frequent and infrequent itemsets by taking the opinions of different customers, using a dissimilarity matrix between frequent and infrequent items computed with the binary-variable technique. We also use an AND-gate logic function for combining opinions on frequent and infrequent items. After finding the frequent and infrequent items, we apply Classification Based on Associations (CBA) to them to obtain better classification
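    The binary-variable dissimilarity mentioned above can be illustrated with the standard symmetric formula: for two items whose customer opinions are 0/1 vectors, with q = both 1, r = first only, s = second only, and t = both 0, the dissimilarity is (r + s) / (q + r + s + t). This is a generic sketch of that textbook measure, not the paper's exact procedure; the item names below are invented.

    ```python
    def binary_dissimilarity(x, y):
        """Symmetric binary dissimilarity between two 0/1 opinion vectors."""
        q = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)  # both agree: 1
        r = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)  # only x is 1
        s = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)  # only y is 1
        t = sum(1 for a, b in zip(x, y) if a == 0 and b == 0)  # both agree: 0
        return (r + s) / (q + r + s + t)

    def dissimilarity_matrix(items):
        """items: dict name -> 0/1 opinion vector; returns nested dict d[i][j]."""
        names = list(items)
        return {i: {j: binary_dissimilarity(items[i], items[j]) for j in names}
                for i in names}
    ```

    A dissimilarity of 0 means every customer rated the two items identically; 1 means no customer rated them the same way.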

    A SCALABLE APPROACH TOWARDS MANAGEMENT OF CONSISTENT DATA IN CLOUD SETTING

    A number of recent works have focused on preserving identity privacy from public verifiers during auditing of shared-data integrity. To allow the integrity of shared data to be verified publicly, users within a group need to compute signatures on all the blocks in the shared data. In this work we put forward Panda, a new public auditing mechanism for the integrity of shared data with efficient user revocation in the cloud. The mechanism is efficient and scalable: it not only supports a large number of users sharing data but can also handle numerous auditing tasks simultaneously through batch auditing, verifying multiple auditing tasks at the same time, and it remains efficient and secure during user revocation. By designing a proxy re-signature scheme with desirable properties that traditional proxy re-signatures do not have, our mechanism can always ensure the integrity of shared data without retrieving the entire data from the cloud
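    The batch-auditing idea, checking many blocks' integrity tags in one verification pass rather than one round per block, can be illustrated with a toy. Note the heavy simplification: HMAC tags below stand in for Panda's proxy re-signatures, which this sketch does not implement, and all keys and block ids are invented.

    ```python
    import hashlib
    import hmac

    def tag(key, block_id, data):
        """Toy integrity tag over one data block (stand-in for a re-signature)."""
        return hmac.new(key, block_id + b"|" + data, hashlib.sha256).digest()

    def batch_audit(key, blocks):
        """blocks: list of (block_id, data, tag) triples.

        Verifies every block's tag in a single pass; returns True only if
        all tags check out, so one tampered block fails the whole batch.
        """
        return all(hmac.compare_digest(tag(key, bid, data), t)
                   for bid, data, t in blocks)
    ```

    In the real scheme the auditor holds only public keys and never sees the data itself; the shared-key HMAC here merely shows the batching structure.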